216. Mounting a ReadWriteMany PVC on GKE
Background
On GKE, PVCs backed by the default persistent-disk StorageClasses do not support ReadWriteMany.
The goal is to have every pod write its logs into the same directory,
and then use Filebeat to ship them into EFK.
Approach
What we do here is run our own NFS server.
Alternatively, you can create a disk on GCE by hand
and reference it directly in the deployment, as in the snippet below.
That was my predecessor's approach; I replaced it
so that everything is provisioned in a single yaml.
gcePersistentDisk:
  pdName: gke-log-nfs-disk
  fsType: ext4
NFS-server.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: nfs
spec:
  storageClassName: "standard"
  resources:
    requests:
      storage: 100Gi
  accessModes:
    - ReadWriteOnce
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: nfs-pvc
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: nfs-pvc
          # gcePersistentDisk:
          #   pdName: gke-log-nfs-disk
          #   fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  namespace: nfs
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
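Assuming the manifest above is saved as NFS-server.yaml, applying and sanity-checking it looks roughly like this (standard kubectl commands; actual output depends on your cluster):

```shell
# Create the namespace, PVC, NFS server Deployment, and Service in one shot
kubectl apply -f NFS-server.yaml

# Wait for the NFS server pod to be Running before deploying the provisioner
kubectl -n nfs get pods -l role=nfs-server

# The Service DNS name nfs-server.nfs.svc.cluster.local is what the provisioner uses
kubectl -n nfs get svc nfs-server
```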
Next, deploy a provisioner.
nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-log
            - name: NFS_SERVER
              value: nfs-server.nfs.svc.cluster.local
            - name: NFS_PATH
              value: /
      volumes:
        - name: nfs-client-root
          nfs:
            server: nfs-server.nfs.svc.cluster.local
            path: /
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-log
provisioner: nfs-log
parameters:
  archiveOnDelete: "false"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
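Once the provisioner manifest is applied, a quick way to confirm it registered correctly (again, plain kubectl; output depends on the cluster):

```shell
# Deploy the provisioner, StorageClass, and RBAC objects
kubectl apply -f nfs-provisioner.yaml

# The nfs-log StorageClass should now exist with provisioner "nfs-log"
kubectl get storageclass nfs-log

# The provisioner pod must be Running for dynamic provisioning to work
kubectl -n nfs get pods -l app=nfs-client-provisioner
```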
To use it,
create a PVC with storageClassName set to the StorageClass created above.
Mounting it from a deployment then works exactly like any other PVC.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: logs-nfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-log
  resources:
    requests:
      storage: 100Gi
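Since the PVC is ReadWriteMany, any number of replicas can mount it at once. A minimal sketch of a deployment mounting it for the shared-log use case (the app name, image, and paths here are placeholders, not from the original setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-writer            # hypothetical example app
  namespace: default
spec:
  replicas: 3                 # multiple replicas share the same RWX volume
  selector:
    matchLabels:
      app: log-writer
  template:
    metadata:
      labels:
        app: log-writer
    spec:
      containers:
        - name: app
          image: busybox:1.36   # placeholder image
          command: ["sh", "-c", "while true; do date >> /logs/$(hostname).log; sleep 60; done"]
          volumeMounts:
            - name: logs
              mountPath: /logs  # every replica writes into the same NFS-backed directory
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: logs-nfs-pvc   # the ReadWriteMany PVC defined above
```

Filebeat can then tail that same directory from a single place instead of per-node log paths.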